Large datasets in NLP suffer from noisy labels due to erroneous automatic and human annotation procedures. We study the problem of text classification with label noise and aim to capture this noise through an auxiliary noise model on top of the classifier. We first assign each training sample a probability score of having a noisy label, via a beta mixture model fitted on the losses at an early epoch of training. We then use this score to selectively guide the learning of the noise model and the classifier. Our empirical evaluation on two text classification tasks shows that our approach improves over the baseline accuracy and prevents over-fitting to the noise.
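The core of this pipeline is easy to sketch. Below is a minimal illustration (not the authors' code; all names are ours) of fitting a two-component beta mixture to per-sample losses from an early training epoch via EM, then reading off the posterior of the high-loss component as the probability that a sample's label is noisy.

```python
# A minimal sketch of the idea: fit a two-component beta mixture to per-sample
# early-epoch losses; the posterior of the high-loss component ~ P(label noisy).
import numpy as np
from scipy.stats import beta as beta_dist

def fit_beta_mixture(losses, n_iter=20, eps=1e-4):
    # Normalize losses to (0, 1), the support of the beta distribution.
    x = (losses - losses.min()) / (losses.max() - losses.min() + 1e-12)
    x = np.clip(x, eps, 1 - eps)
    # Initialize: component 0 = clean (low loss), component 1 = noisy (high loss).
    params = [(2.0, 5.0), (5.0, 2.0)]
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        lik = np.stack([w * beta_dist.pdf(x, a, b)
                        for (a, b), w in zip(params, weights)])
        resp = lik / (lik.sum(axis=0, keepdims=True) + 1e-12)
        # M-step: weighted method-of-moments update of each beta component.
        new_params = []
        for k in range(2):
            r = resp[k]
            m = np.average(x, weights=r)
            v = np.average((x - m) ** 2, weights=r) + 1e-12
            common = m * (1 - m) / v - 1
            new_params.append((max(m * common, eps), max((1 - m) * common, eps)))
        params = new_params
        weights = resp.mean(axis=1)
    return params, weights, resp[1]  # resp[1] = P(noisy | loss)

losses = np.random.beta(2, 8, size=1000)   # stand-in for early-epoch losses
_, _, p_noisy = fit_beta_mixture(losses)
```

This score can then gate each sample: low-noise samples update the classifier directly, while likely-noisy ones are routed through the auxiliary noise model.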
Automatically estimating 3D skeleton, shape, camera viewpoints, and part articulation from sparse in-the-wild image ensembles is a severely under-constrained and challenging problem. Most prior methods rely on large-scale image datasets, dense temporal correspondence, or human annotations such as camera pose, 2D keypoints, and shape templates. We propose Hi-LASSIE, which performs 3D articulated reconstruction from only 20-30 online images in the wild, without any user-defined shape or skeleton templates. We follow the recent work of LASSIE, which tackles a similar problem setting, and make two significant advances. First, instead of relying on a manually annotated 3D skeleton, we automatically estimate a class-specific skeleton from a selected reference image. Second, we improve the shape reconstructions with novel instance-specific optimization strategies that allow the reconstructions to faithfully fit each instance while preserving the class-specific priors learned across all images. Experiments on in-the-wild image ensembles show that Hi-LASSIE obtains higher-quality, state-of-the-art 3D reconstructions despite requiring minimal user input.
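The second advance can be illustrated in the abstract. The toy sketch below (not Hi-LASSIE's code; the real objective uses silhouette and feature reconstruction terms) shows per-instance optimization regularized toward a shared class-level code, so each fit stays close to the prior learned across all images.

```python
# A toy sketch of instance-specific optimization with a class-level prior.
import torch

shared_prior = torch.randn(64)                 # class-level shape code (hypothetical)
instance_code = shared_prior.clone().requires_grad_(True)
opt = torch.optim.Adam([instance_code], lr=1e-2)

def reconstruction_loss(code: torch.Tensor) -> torch.Tensor:
    # Placeholder for how well the shape decoded from `code` matches one image.
    target = torch.zeros_like(code)
    return ((code - target) ** 2).mean()

for step in range(200):
    loss = reconstruction_loss(instance_code)
    loss = loss + 0.1 * ((instance_code - shared_prior) ** 2).mean()  # stay near prior
    opt.zero_grad()
    loss.backward()
    opt.step()
```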
Code generation models have achieved impressive performance. However, they tend to be brittle, as slight edits to a prompt can lead to very different generations; these robustness properties, critical for user experience when models are deployed in real-life applications, are not well understood. Most existing work on robustness in text or code tasks has focused on classification, while robustness in generation tasks is an uncharted area, and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code, covering docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice and to preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models that consider the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as an objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, and function completion tasks derived from them. Interesting observations include: CodeGen is more robust than InCoder and GPT-J; models are most sensitive to syntax perturbations; and robustness evaluation is more challenging on MBPP than on HumanEval.
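The worst-case metric is straightforward to express. Below is a hedged sketch (our names, not ReCode's actual API): a prompt counts as robustly solved only if the generated code passes its tests under the original prompt and every perturbed variant.

```python
# A minimal sketch of a worst-case robustness metric for code generation.
from typing import Callable, Iterable

def robust_pass_rate(prompts: Iterable[str],
                     perturb: Callable[[str], list[str]],
                     generate: Callable[[str], str],
                     passes_tests: Callable[[str, str], bool]) -> float:
    solved = 0
    total = 0
    for prompt in prompts:
        total += 1
        variants = [prompt] + perturb(prompt)   # original + perturbed prompts
        # Worst case: every variant's generation must pass the unit tests.
        if all(passes_tests(prompt, generate(v)) for v in variants):
            solved += 1
    return solved / max(total, 1)

# Example perturbation: rename a variable in the prompt (one of ReCode's
# transformation families; this trivial version is illustrative only).
def rename_variable(prompt: str) -> list[str]:
    return [prompt.replace("result", "res")] if "result" in prompt else []
```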
Creativity is an indispensable part of human cognition and also an inherent part of how we make sense of the world. Metaphorical abstraction is fundamental in communicating creative ideas through nuanced relationships between abstract concepts such as feelings. While computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images, metaphorical comprehension of images remains relatively unexplored. Towards this goal, we introduce MetaCLUE, a set of vision tasks on visual metaphor. We also collect high-quality and rich metaphor annotations (abstract objects, concepts, relationships along with their corresponding object boxes) as there do not exist any datasets that facilitate the evaluation of these tasks. We perform a comprehensive analysis of state-of-the-art models in vision and language based on our annotations, highlighting strengths and weaknesses of current approaches in visual metaphor Classification, Localization, Understanding (retrieval, question answering, captioning) and gEneration (text-to-image synthesis) tasks. We hope this work provides a concrete step towards developing AI systems with human-like creative capabilities.
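For concreteness, a record in such a dataset might carry fields like the following hypothetical schema, based only on the annotation types named above (the real MetaCLUE schema may differ):

```python
# A hypothetical sketch of one metaphor annotation record; field names are ours.
from dataclasses import dataclass, field

@dataclass
class Box:
    x: float; y: float; w: float; h: float

@dataclass
class MetaphorAnnotation:
    image_id: str
    primary_concept: str              # e.g., the abstract feeling being conveyed
    secondary_concept: str            # the concrete object carrying the metaphor
    relationship: str                 # how the two concepts relate
    object_boxes: dict[str, Box] = field(default_factory=dict)
```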
Large-scale diffusion models have achieved state-of-the-art results on text-to-image synthesis (T2I) tasks. Despite their ability to generate high-quality yet creative images, we observe that attribute binding and compositional capabilities are still considered major challenges, especially when multiple objects are involved. In this work, we improve the compositional skills of T2I models, specifically targeting more accurate attribute binding and better image compositions. To do this, we incorporate linguistic structures into the diffusion guidance process, based on the controllable properties of manipulating cross-attention layers in diffusion-based T2I models. We observe that keys and values in cross-attention layers have strong semantic meanings associated with object layouts and content. Therefore, we can better preserve the compositional semantics in the generated image by manipulating the cross-attention representations based on linguistic insights. Built upon Stable Diffusion, a SOTA T2I model, our structured cross-attention design is efficient and requires no additional training samples. We achieve better compositional skills in qualitative and quantitative results, leading to a 5-8% advantage in head-to-head user comparison studies. Lastly, we conduct an in-depth analysis to reveal potential causes of incorrect image compositions and to justify the properties of cross-attention layers in the generation process.
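The cross-attention manipulation can be illustrated in isolation. The following simplified sketch (assumed shapes, with a mean-pooling stand-in rather than the paper's exact operator) shares a pooled value vector within each parsed noun-phrase span, so attribute tokens contribute a signal consistent with the object they modify.

```python
# A simplified, self-contained sketch of structuring cross-attention values
# by noun-phrase spans; not the paper's code.
import torch

def structure_values(values: torch.Tensor,
                     spans: list[tuple[int, int]]) -> torch.Tensor:
    """values: (seq_len, dim) value vectors for the text tokens.
    spans: [start, end) token ranges of noun phrases from a parser."""
    out = values.clone()
    for start, end in spans:
        # Share one pooled value inside the span so the attribute and its noun
        # contribute a consistent signal to the attended image features.
        out[start:end] = values[start:end].mean(dim=0, keepdim=True)
    return out

token_values = torch.randn(10, 64)            # toy token value vectors
noun_phrase_spans = [(1, 4), (5, 8)]          # e.g., "a red ball", "a blue cube"
structured = structure_values(token_values, noun_phrase_spans)
```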
With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work on generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy, and that the embedding difference between the vanilla message passing and EXPASS frameworks can be upper-bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve predictive performance and alleviate the oversmoothing problems of GNNs, opening up new frontiers in graph machine learning for developing explanation-based training frameworks.
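The aggregation rule is simple to sketch. Below is a minimal illustration (hypothetical tensors, not the EXPASS release) of message passing in which an explainer's edge-importance scores decide which messages are aggregated.

```python
# A minimal sketch of explanation-directed message passing: only messages
# along edges the explainer marks as important are aggregated.
import torch

def explained_message_passing(h: torch.Tensor,
                              edge_index: torch.Tensor,
                              edge_importance: torch.Tensor,
                              threshold: float = 0.5) -> torch.Tensor:
    """h: (num_nodes, dim) node embeddings; edge_index: (2, num_edges);
    edge_importance: (num_edges,) scores in [0, 1] from an explainer."""
    src, dst = edge_index
    keep = edge_importance >= threshold          # drop unimportant edges
    messages = h[src[keep]]                      # messages along kept edges only
    agg = torch.zeros_like(h)
    agg.index_add_(0, dst[keep], messages)       # sum-aggregate at destinations
    return agg

h = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_importance = torch.tensor([0.9, 0.2, 0.8, 0.4])
h_next = explained_message_passing(h, edge_index, edge_importance)
```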
Deep learning approaches for spatio-temporal prediction problems such as crowd-flow prediction assume the data to be a fixed, regularly shaped tensor and face challenges in handling irregular, sparse data tensors. This poses limitations in use-case scenarios such as predicting an individual's visit counts for a given spatial area at a particular temporal resolution using a raster/image representation of the geographical region, since the movement patterns of an individual can be largely restricted and localized to a certain part of the raster. Additionally, current deep-learning approaches for solving such problems do not account for the geographical awareness of a region while modeling the spatio-temporal movement patterns of an individual. To address these limitations, there is a need for a novel strategy and modeling approach that can handle sparse, irregular data while incorporating geo-awareness into the model. In this paper, we use a quadtree as the data structure for representing the image and introduce a novel geo-aware deep learning layer, GA-ConvLSTM, which performs the convolution operation based on a novel geo-aware module built on the quadtree data structure to incorporate spatial dependencies, while maintaining the recurrent mechanism that accounts for temporal dependencies. We present this approach in the context of predicting the spatial behaviors of an individual (e.g., frequent visits to specific locations) through a deep-learning-based predictive model, GADST-Predict. Experimental results on two GPS-based trace datasets show that the proposed method is effective in predicting visit frequencies across different use cases with considerably high accuracy.
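The quadtree representation itself is easy to illustrate. The toy sketch below (illustrative only) subdivides only the occupied cells of a sparse visit-count raster, so empty regions stay coarse while active regions are refined.

```python
# A toy quadtree over a sparse raster: empty blocks become leaves immediately,
# occupied blocks are recursively split into four quadrants.
import numpy as np

def build_quadtree(grid: np.ndarray, x=0, y=0, size=None, min_size=1):
    size = size if size is not None else grid.shape[0]  # assume square, power of 2
    block = grid[y:y + size, x:x + size]
    if size <= min_size or not block.any():
        return {"x": x, "y": y, "size": size, "count": int(block.sum())}
    half = size // 2
    return {"x": x, "y": y, "size": size,
            "children": [build_quadtree(grid, x + dx, y + dy, half, min_size)
                         for dy in (0, half) for dx in (0, half)]}

raster = np.zeros((8, 8), dtype=int)
raster[1, 2] = 3; raster[6, 5] = 1                      # sparse visit counts
tree = build_quadtree(raster)
```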
This paper addresses the problem of position estimation for UAVs operating in cluttered environments where GPS information is unavailable. A model-learning-based approach is proposed that takes the rotor RPMs and past state as input and predicts the one-step-ahead position of the UAV using a novel spectral-normalized memory neural network (SN-MNN). The spectral normalization guarantees stable and reliable prediction performance. The predicted position is transformed to the global coordinate frame and then fused with the odometry of other peripheral sensors, such as the IMU, barometer, and compass, using the onboard extended Kalman filter to estimate the states of the UAV. Experimental flight data collected from a motion-capture facility using a micro-UAV is used to train the SN-MNN. The PX4-ECL library is used to replay the flight data through the proposed algorithm, and the estimated position is compared with the ground truth. The proposed algorithm does not require any additional onboard sensors and is computationally light. Compared with current state-of-the-art GPS-denied algorithms, the proposed algorithm achieves the lowest RMSE for position estimates.
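The ingredient we can show concretely is spectral normalization on the predictor's layers, which bounds each layer's Lipschitz constant and underpins the stability claim. The sketch below uses assumed dimensions and a simple GRU memory as a stand-in; the actual SN-MNN architecture is not reproduced here.

```python
# A hedged sketch: a one-step-ahead position predictor whose linear layers are
# spectral-normalized; dimensions and the GRU memory are our assumptions.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class OneStepPredictor(nn.Module):
    def __init__(self, state_dim=9, n_rotors=4, hidden=64):
        super().__init__()
        # Spectral norm constrains each layer's largest singular value,
        # bounding its Lipschitz constant for stable predictions.
        self.inp = spectral_norm(nn.Linear(state_dim + n_rotors, hidden))
        self.mem = nn.GRUCell(hidden, hidden)            # stand-in memory cell
        self.out = spectral_norm(nn.Linear(hidden, 3))   # predicted xyz position

    def forward(self, state, rpms, h):
        z = torch.relu(self.inp(torch.cat([state, rpms], dim=-1)))
        h = self.mem(z, h)
        return self.out(h), h

model = OneStepPredictor()
h = torch.zeros(1, 64)
pos, h = model(torch.randn(1, 9), torch.randn(1, 4), h)
```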
The upcoming exascale era will enable a new generation of physics simulations. These simulations will have a high spatiotemporal resolution, which will impact the training of machine learning models, since storing such a high volume of simulation data on disk is nearly impossible. We therefore need to rethink how machine learning models are trained on simulations for the upcoming exascale era. This work presents an approach that trains a neural network concurrently with a running simulation, without storing data on disk. The training pipeline accesses the training data by in-memory streaming. Furthermore, we apply methods from the domain of continual learning to enhance the generalization of the model. We tested our pipeline by training a 3D autoencoder concurrently with a laser wakefield acceleration particle-in-cell simulation. We also experimented with various continual learning methods and their effect on generalization.
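The streaming idea reduces to a producer-consumer pattern. Below is a minimal sketch (toy data, hypothetical names): the simulation pushes snapshots into a bounded in-memory queue and the training loop consumes them without touching the disk.

```python
# A minimal producer-consumer sketch of in-memory streaming training.
import queue
import threading
import numpy as np

stream: "queue.Queue[np.ndarray]" = queue.Queue(maxsize=8)  # bounded buffer

def simulation():
    for step in range(100):
        field = np.random.rand(16, 16, 16).astype(np.float32)  # stand-in PIC output
        stream.put(field)            # blocks when the trainer falls behind
    stream.put(None)                 # sentinel: simulation finished

def trainer():
    while (batch := stream.get()) is not None:
        pass  # one training step of the autoencoder on `batch` would go here

sim = threading.Thread(target=simulation)
trn = threading.Thread(target=trainer)
sim.start(); trn.start(); sim.join(); trn.join()
```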
Visual Inertial Odometry (VIO) is one of the most established state-estimation methods for mobile platforms. However, when visual tracking fails, VIO algorithms quickly diverge due to rapid error accumulation during inertial data integration. This error is typically modeled as a combination of additive Gaussian noise and a slowly changing bias that evolves as a random walk. In this work, we propose to train a neural network to learn the true bias evolution. We implement and compare two common sequential deep learning architectures: LSTMs and Transformers. Our approach follows recent learning-based inertial estimators, but instead of learning a motion model, we target the IMU bias explicitly, which allows us to generalize to locomotion patterns unseen in training. We show that our proposed method improves state estimation in visually challenging situations across a wide range of motions performed by quadrupedal robots, walking humans, and drones. Our experiments show an average 15% reduction in drift rate, with much larger reductions when there is total vision failure. Importantly, we also demonstrate that models trained on one locomotion pattern (human walking) can be applied to another (quadruped robot trotting) without retraining.
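The learning target can be sketched directly. The following is a hedged illustration (dimensions assumed, not taken from the paper) of an LSTM that maps a window of raw IMU readings to the current gyro and accelerometer biases, which would then feed the VIO filter in place of the random-walk model.

```python
# A hedged sketch: an LSTM regressing the current IMU bias from a raw IMU window.
import torch
import torch.nn as nn

class BiasEstimator(nn.Module):
    def __init__(self, imu_dim=6, hidden=128, bias_dim=6):
        super().__init__()
        self.lstm = nn.LSTM(imu_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, bias_dim)   # 3 gyro + 3 accel bias terms

    def forward(self, imu_window):                # (batch, time, imu_dim)
        feats, _ = self.lstm(imu_window)
        return self.head(feats[:, -1])            # bias at the window's last step

model = BiasEstimator()
predicted_bias = model(torch.randn(4, 200, 6))    # 4 windows of 200 IMU samples
```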